Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smartphone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.
In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using the Oxford Flowers 102 dataset, which contains 102 flower categories; you can see a few examples below.

The project is broken down into multiple steps:
- Load the image dataset and create a pipeline.
- Build and train an image classifier on this dataset.
- Use your trained model to perform inference on flower images.

We'll lead you through each part, which you'll implement in Python.
When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will learn about flowers and end up as a command-line application. But what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car and it tells you the make and model, then looks up information about it. Go build your own dataset and make something new.
# Necessary imports:
import json
import logging
import time
import warnings

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_hub as hub
from PIL import Image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

%matplotlib inline
%config InlineBackend.figure_format = 'retina'

# Silence progress bars and non-error logging:
tfds.disable_progress_bar()
warnings.filterwarnings('ignore')
logger = tf.get_logger()
logger.setLevel(logging.ERROR)

print('Using:')
print('\t\u2022 TensorFlow version:', tf.__version__)
print('\t\u2022 tf.keras version:', tf.keras.__version__)
print('\t\u2022 Running on GPU' if tf.config.list_physical_devices('GPU')
      else '\t\u2022 GPU device not found. Running on CPU')
Using:
	• TensorFlow version: 2.6.0
	• tf.keras version: 2.6.0
	• Running on GPU
Here you'll use tensorflow_datasets to load the Oxford Flowers 102 dataset. This dataset has 3 splits: 'train', 'test', and 'validation'. You'll also need to make sure the training data is normalized and resized to 224x224 pixels as required by the pre-trained networks.
The validation and testing sets are used to measure the model's performance on data it hasn't seen yet, but you'll still need to normalize and resize the images to the appropriate size.
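As a preview of that preprocessing step, here is a minimal sketch (format_image and IMAGE_SIZE are illustrative names, not part of the dataset API; the full input pipeline is built later in the project):

# Resize an image to 224x224 and scale pixel values into [0, 1]:
IMAGE_SIZE = 224  # input size expected by the pre-trained networks

def format_image(image, label):
    image = tf.cast(image, tf.float32)
    image = tf.image.resize(image, (IMAGE_SIZE, IMAGE_SIZE))
    image /= 255.0  # normalize to [0, 1]
    return image, label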
# Downloading data to default local directory "~/tensorflow_datasets":
!python -m tensorflow_datasets.scripts.download_and_prepare --register_checksums=True --datasets=oxford_flowers102
# Loading the dataset with TensorFlow Datasets:
dataset, dataset_info = tfds.load('oxford_flowers102', with_info=True, as_supervised=True)
# Creation of a training set, a validation set and a test set.
# Note: the TFDS 'test' split (6,149 images) is by far the largest, so it is
# used here for training, while the 1,020-image 'train' split serves as the test set:
training_set, validation_set, test_set = dataset['test'], dataset['validation'], dataset['train']
Downloading and preparing dataset oxford_flowers102/2.1.1 (download: 328.90 MiB, generated: 331.34 MiB, total: 660.25 MiB) to /root/tensorflow_datasets/oxford_flowers102/2.1.1...
[TensorFlow device logs and TFDS download/extraction progress bars omitted.]
Generating split train: done writing oxford_flowers102-train.tfrecord. Shard lengths: [1020]
Generating split test: done writing oxford_flowers102-test.tfrecord. Shard lengths: [3074, 3075]
Generating split validation: done writing oxford_flowers102-validation.tfrecord. Shard lengths: [1020]
Dataset oxford_flowers102 downloaded and prepared to /root/tensorflow_datasets/oxford_flowers102/2.1.1. Subsequent calls will reuse this data.

name: "oxford_flowers102"
description: "The Oxford Flowers 102 dataset consists of 102 flower categories commonly occurring in the United Kingdom. Each class consists of between 40 and 258 images. The images have large scale, pose and light variations. In addition, there are categories that have large variations within the category and several very similar categories. The dataset is divided into a training set, a validation set and a test set. The training set and validation set each consist of 10 images per class (totalling 1020 images each). The test set consists of the remaining 6149 images (minimum 20 per class)."
citation: Nilsback, M-E. and Zisserman, A., "Automated Flower Classification over a Large Number of Classes", Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, Dec 2008.
location: https://www.robots.ox.ac.uk/~vgg/data/flowers/102/
features: file_name (string), image (variable-size H x W x 3), label (int in [0, 101])
splits: train (1020 examples, 1 shard), validation (1020 examples, 1 shard), test (6149 examples, 2 shards)
[Full DatasetInfo schema and per-split statistics omitted.]
109.242 } } histograms { buckets { high_value: 10.0 sample_count: 102.0 } buckets { low_value: 10.0 high_value: 20.0 sample_count: 102.0 } buckets { low_value: 20.0 high_value: 30.0 sample_count: 102.0 } buckets { low_value: 30.0 high_value: 40.0 sample_count: 102.0 } buckets { low_value: 40.0 high_value: 51.0 sample_count: 102.0 } buckets { low_value: 51.0 high_value: 61.0 sample_count: 102.0 } buckets { low_value: 61.0 high_value: 71.0 sample_count: 102.0 } buckets { low_value: 71.0 high_value: 81.0 sample_count: 102.0 } buckets { low_value: 81.0 high_value: 91.0 sample_count: 102.0 } buckets { low_value: 91.0 high_value: 101.0 sample_count: 102.0 } type: QUANTILES } } path { step: "label" } } } shard_lengths: 1020 num_bytes: 43474584 } splits { name: "validation" statistics { num_examples: 1020 features { type: STRING string_stats { common_stats { num_non_missing: 1020 min_num_values: 1 max_num_values: 1 avg_num_values: 1.0 num_values_histogram { buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } type: QUANTILES } tot_num_values: 1020 } unique: 1020 top_values { value: "image_08187.jpg" frequency: 1.0 } top_values { value: "image_08185.jpg" frequency: 1.0 } top_values { value: "image_08182.jpg" frequency: 1.0 } top_values { value: "image_08179.jpg" frequency: 1.0 } top_values { value: "image_08176.jpg" frequency: 1.0 } top_values { value: "image_08174.jpg" frequency: 1.0 } top_values { value: "image_08173.jpg" frequency: 1.0 } top_values { value: "image_08157.jpg" frequency: 1.0 } top_values { value: "image_08138.jpg" frequency: 1.0 } top_values { value: "image_08133.jpg" frequency: 1.0 } avg_length: 15.0 rank_histogram { buckets { label: "image_08187.jpg" sample_count: 1.0 } buckets { low_rank: 1 high_rank: 1 label: "image_08185.jpg" sample_count: 1.0 } buckets { low_rank: 2 high_rank: 2 label: "image_08182.jpg" sample_count: 1.0 } buckets { low_rank: 3 high_rank: 3 label: "image_08179.jpg" sample_count: 1.0 } buckets { low_rank: 4 high_rank: 4 label: "image_08176.jpg" sample_count: 1.0 } buckets { low_rank: 5 high_rank: 5 label: "image_08174.jpg" sample_count: 1.0 } buckets { low_rank: 6 high_rank: 6 label: "image_08173.jpg" sample_count: 1.0 } buckets { low_rank: 7 high_rank: 7 label: "image_08157.jpg" sample_count: 1.0 } buckets { low_rank: 8 high_rank: 8 label: "image_08138.jpg" sample_count: 1.0 } buckets { low_rank: 9 high_rank: 9 label: "image_08133.jpg" sample_count: 1.0 } } } path { step: "file_name" } } features { type: STRING string_stats { common_stats { num_non_missing: 1020 min_num_values: 1 max_num_values: 1 avg_num_values: 1.0 num_values_histogram { buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 
sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } type: QUANTILES } tot_num_values: 1020 } unique: 1020 top_values { value: "__BYTES_VALUE__" frequency: 1.0 } top_values { value: "__BYTES_VALUE__" frequency: 1.0 } top_values { value: "__BYTES_VALUE__" frequency: 1.0 } top_values { value: "__BYTES_VALUE__" frequency: 1.0 } top_values { value: "__BYTES_VALUE__" frequency: 1.0 } top_values { value: "__BYTES_VALUE__" frequency: 1.0 } top_values { value: "__BYTES_VALUE__" frequency: 1.0 } top_values { value: "__BYTES_VALUE__" frequency: 1.0 } top_values { value: "__BYTES_VALUE__" frequency: 1.0 } top_values { value: "__BYTES_VALUE__" frequency: 1.0 } avg_length: 42256.625 rank_histogram { buckets { label: "__BYTES_VALUE__" sample_count: 1.0 } buckets { low_rank: 1 high_rank: 1 label: "__BYTES_VALUE__" sample_count: 1.0 } buckets { low_rank: 2 high_rank: 2 label: "__BYTES_VALUE__" sample_count: 1.0 } buckets { low_rank: 3 high_rank: 3 label: "__BYTES_VALUE__" sample_count: 1.0 } buckets { low_rank: 4 high_rank: 4 label: "__BYTES_VALUE__" sample_count: 1.0 } buckets { low_rank: 5 high_rank: 5 label: "__BYTES_VALUE__" sample_count: 1.0 } buckets { low_rank: 6 high_rank: 6 label: "__BYTES_VALUE__" sample_count: 1.0 } buckets { low_rank: 7 high_rank: 7 label: "__BYTES_VALUE__" sample_count: 1.0 } buckets { low_rank: 8 high_rank: 8 label: "__BYTES_VALUE__" sample_count: 1.0 } buckets { low_rank: 9 high_rank: 9 label: "__BYTES_VALUE__" sample_count: 1.0 } } } path { step: "image" } } features { num_stats { common_stats { num_non_missing: 1020 min_num_values: 1 max_num_values: 1 avg_num_values: 1.0 num_values_histogram { buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } buckets { low_value: 1.0 high_value: 1.0 sample_count: 102.0 } type: QUANTILES } tot_num_values: 1020 } mean: 50.5 std_dev: 29.443448620476957 num_zeros: 10 median: 51.0 max: 101.0 histograms { buckets { high_value: 10.1 sample_count: 109.242 } buckets { low_value: 10.1 high_value: 20.2 sample_count: 100.06200000000001 } buckets { low_value: 20.2 high_value: 30.299999999999997 sample_count: 100.06200000000001 } buckets { low_value: 30.299999999999997 high_value: 40.4 sample_count: 100.06200000000001 } buckets { low_value: 40.4 high_value: 50.5 sample_count: 100.06200000000001 } buckets { low_value: 50.5 high_value: 60.599999999999994 sample_count: 101.08200000000001 } buckets { low_value: 60.599999999999994 high_value: 70.7 sample_count: 100.06200000000001 } buckets { low_value: 70.7 high_value: 80.8 sample_count: 100.06200000000001 } buckets { low_value: 80.8 high_value: 90.89999999999999 sample_count: 100.06200000000001 } buckets { low_value: 90.89999999999999 high_value: 101.0 sample_count: 109.242 } } histograms { buckets { high_value: 10.0 sample_count: 102.0 } buckets { low_value: 10.0 high_value: 20.0 
sample_count: 102.0 } buckets { low_value: 20.0 high_value: 30.0 sample_count: 102.0 } buckets { low_value: 30.0 high_value: 40.0 sample_count: 102.0 } buckets { low_value: 40.0 high_value: 51.0 sample_count: 102.0 } buckets { low_value: 51.0 high_value: 61.0 sample_count: 102.0 } buckets { low_value: 61.0 high_value: 71.0 sample_count: 102.0 } buckets { low_value: 71.0 high_value: 81.0 sample_count: 102.0 } buckets { low_value: 81.0 high_value: 91.0 sample_count: 102.0 } buckets { low_value: 91.0 high_value: 101.0 sample_count: 102.0 } type: QUANTILES } } path { step: "label" } } } shard_lengths: 1020 num_bytes: 43180278 } supervised_keys { input: "image" output: "label" } version: "2.1.1" download_size: 344878000
# Data Examination:
dataset = training_set, validation_set, test_set
print('dataset has type:', type(dataset))
print('dataset has {:,} elements '.format(len(dataset)))
dataset
dataset_info
dataset has type: <class 'tuple'> dataset has 3 elements
tfds.core.DatasetInfo(
name='oxford_flowers102',
version=2.1.1,
description='The Oxford Flowers 102 dataset consists of 102 flower categories commonly occurring
in the United Kingdom. Each class consists of between 40 and 258 images. The images have
large scale, pose and light variations. In addition, there are categories that have large
variations within the category and several very similar categories.
The dataset is divided into a training set, a validation set and a test set.
The training set and validation set each consist of 10 images per class (totalling 1020 images each).
The test set consists of the remaining 6149 images (minimum 20 per class).',
homepage='https://www.robots.ox.ac.uk/~vgg/data/flowers/102/',
features=FeaturesDict({
'file_name': Text(shape=(), dtype=tf.string),
'image': Image(shape=(None, None, 3), dtype=tf.uint8),
'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=102),
}),
total_num_examples=8189,
splits={
'test': 6149,
'train': 1020,
'validation': 1020,
},
supervised_keys=('image', 'label'),
citation="""@InProceedings{Nilsback08,
author = "Nilsback, M-E. and Zisserman, A.",
title = "Automated Flower Classification over a Large Number of Classes",
booktitle = "Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing",
year = "2008",
month = "Dec"
}""",
redistribution_info=,
)
# Getting the number of examples in each set from the dataset info.
# Note: the larger 'test' split (6,149 images) is used for training here,
# and the original 'train' split serves as the test set (see the split assignment above):
num_examples_train = dataset_info.splits['test'].num_examples
num_examples_validation = dataset_info.splits['validation'].num_examples
num_examples_test = dataset_info.splits['train'].num_examples
print('The number of examples in the training set is {:,}'.format(num_examples_train))
print('The number of examples in the validation set is {:,}'.format(num_examples_validation))
print('The number of examples in the testing set is {:,}'.format(num_examples_test))
print("\n")
# Getting the number of classes in the dataset from the dataset info:
num_classes = dataset_info.features['label'].num_classes
print('The number of classes in the given dataset is {:,}'.format(num_classes))
The number of examples in the training set is 6,149 The number of examples in the validation set is 1,020 The number of examples in the testing set is 1,020 The number of classes in the given dataset is 102
# Printing the shape and corresponding label of 3 images in the training set:
for image, label in training_set.take(3):
    print('The shape of one image in the training set is: ', image.shape)
    print('The label of one image in the training set is : ', label.numpy())
The shape of one image in the training set is: (542, 500, 3) The label of one image in the training set is : 40 The shape of one image in the training set is: (748, 500, 3) The label of one image in the training set is : 76 The shape of one image in the training set is: (500, 600, 3) The label of one image in the training set is : 42
# Plotting 1 image from the training set and setting the title of the plot to the corresponding image label:
for image, label in training_set.take(1):
    image = image.numpy()
    label = label.numpy()
    plt.imshow(image)
    plt.title(label)
    plt.colorbar()
You'll also need to load in a mapping from label to category name. You can find this in the file label_map.json. It's a JSON object which you can read in with the json module, giving you a dictionary that maps the integer-coded labels to the actual names of the flowers. Here the mapping is pasted inline; a sketch of reading it from label_map.json follows the dictionary below.
json_dic = {"21": "fire lily", "3": "canterbury bells", "45": "bolero deep blue", "1": "pink primrose", "34": "mexican aster",
"27": "prince of wales feathers", "7": "moon orchid", "16": "globe-flower", "25": "grape hyacinth", "26": "corn poppy",
"79": "toad lily", "39": "siam tulip", "24": "red ginger", "67": "spring crocus", "35": "alpine sea holly", "32": "garden phlox",
"10": "globe thistle", "6": "tiger lily", "93": "ball moss", "33": "love in the mist", "9": "monkshood", "102": "blackberry lily",
"14": "spear thistle", "19": "balloon flower", "100": "blanket flower", "13": "king protea", "49": "oxeye daisy",
"15": "yellow iris", "61": "cautleya spicata", "31": "carnation", "64": "silverbush", "68": "bearded iris",
"63": "black-eyed susan", "69": "windflower", "62": "japanese anemone", "20": "giant white arum lily", "38": "great masterwort",
"4": "sweet pea", "86": "tree mallow", "101": "trumpet creeper", "42": "daffodil", "22": "pincushion flower",
"2": "hard-leaved pocket orchid", "54": "sunflower", "66": "osteospermum", "70": "tree poppy", "85": "desert-rose",
"99": "bromelia", "87": "magnolia", "5": "english marigold", "92": "bee balm", "28": "stemless gentian",
"97": "mallow", "57": "gaura", "40": "lenten rose", "47": "marigold", "59": "orange dahlia", "48": "buttercup",
"55": "pelargonium", "36": "ruby-lipped cattleya", "91": "hippeastrum", "29": "artichoke", "71": "gazania",
"90": "canna lily", "18": "peruvian lily", "98": "mexican petunia", "8": "bird of paradise", "30": "sweet william",
"17": "purple coneflower", "52": "wild pansy", "84": "columbine", "12": "colt's foot", "11": "snapdragon",
"96": "camellia", "23": "fritillary", "50": "common dandelion", "44": "poinsettia", "53": "primula", "72": "azalea",
"65": "californian poppy", "80": "anthurium", "76": "morning glory", "37": "cape flower", "56": "bishop of llandaff",
"60": "pink-yellow dahlia", "82": "clematis", "58": "geranium", "75": "thorn apple", "41": "barbeton daisy", "95": "bougainvillea",
"43": "sword lily", "83": "hibiscus", "78": "lotus lotus", "88": "cyclamen", "94": "foxglove", "81": "frangipani", "74": "rose", "89": "watercress",
"73": "water lily", "46": "wallflower", "77": "passion flower",
"51": "petunia"}
# Plotting 1 image from the training set. Set the title of the plot to the corresponding class name:
for image, label in training_set.take(1):
    image = image.numpy()
    label = label.numpy()
    plt.imshow(image)
    # +1 because the TFDS labels are 0-based while the json_dic keys are 1-based:
    plt.title(json_dic[str(label + 1)])
    plt.colorbar()
# Creating a pipeline for each set:
image_size = 224
def reshape_image(image, label):
    # Cast to float32, resize to 224x224, and scale pixel values to [0, 1]:
    image = tf.cast(image, tf.float32)
    image = tf.image.resize(image, [image_size, image_size])
    image /= 255
    return image, label
# Printing the shape of images from the training, validation, and testing sets after processing:
for image, label in training_set.take(3):
    print('The shape of one image in the training set is: ', reshape_image(image, label)[0].shape)
for image, label in validation_set.take(3):
    print('The shape of one image in the validation set is: ', reshape_image(image, label)[0].shape)
for image, label in test_set.take(3):
    print('The shape of one image in the testing set is: ', reshape_image(image, label)[0].shape)
batch_size = 32
training_batches = training_set.shuffle(num_examples_train//2).map(reshape_image).batch(batch_size).prefetch(1)
validation_batches = validation_set.map(reshape_image).batch(batch_size).prefetch(1)
testing_batches = test_set.map(reshape_image).batch(batch_size).prefetch(1)
The shape of one image in the training set is: (224, 224, 3) The shape of one image in the training set is: (224, 224, 3) The shape of one image in the training set is: (224, 224, 3) The shape of one image in the validation set is: (224, 224, 3) The shape of one image in the validation set is: (224, 224, 3) The shape of one image in the validation set is: (224, 224, 3) The shape of one image in the testing set is: (224, 224, 3) The shape of one image in the testing set is: (224, 224, 3) The shape of one image in the testing set is: (224, 224, 3)
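As a quick sanity check (not part of the original cells), one batch can also be pulled from the pipeline to confirm the batched shapes:
# Optional sanity check: inspect the shapes of a single batch from the pipeline.
for image_batch, label_batch in training_batches.take(1):
    print('Image batch shape:', image_batch.shape)  # expected: (32, 224, 224, 3)
    print('Label batch shape:', label_batch.shape)  # expected: (32,)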
Now that the data is ready, it's time to build and train the classifier. You should use the MobileNet pre-trained model from TensorFlow Hub to get the image features. Build and train a new feed-forward classifier using those features.
We're going to leave this part up to you. If you want to talk through it with someone, chat with your fellow students!
Refer to the rubric for guidance on successfully completing this section. Things you'll need to do: load the MobileNet pre-trained network from TensorFlow Hub, define a new, untrained feed-forward network as a classifier on top of it, train that classifier, and plot the loss and accuracy values achieved during training for the training and validation sets.
We've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal!
When training, make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right.
Note for Workspace users: One important tip if you're using the workspace to run your code: to avoid having your workspace disconnect during the long-running tasks in this notebook, please read the earlier page in this lesson called Intro to GPU Workspaces about keeping your session active. You'll want to include code from the workspace_utils.py module (a sketch follows below). Also, if your model is over 1 GB when saved as a checkpoint, there might be issues with saving backups in your workspace. If your saved checkpoint is larger than 1 GB (you can open a terminal and check with ls -lh), you should reduce the size of your hidden layers and train again.
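For reference, a minimal sketch of keeping the session alive during long-running work, assuming workspace_utils.py from the lesson sits next to this notebook and exposes the active_session context manager described there:
# Hypothetical usage of the lesson's workspace_utils helper; requires
# workspace_utils.py in the working directory:
from workspace_utils import active_session
with active_session():
    # Long-running work goes here, e.g. the model.fit call from the training cell below.
    ...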
# Loading the MobileNet pre-trained model from TensorFlow Hub:
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL, input_shape=(image_size, image_size, 3))
# Freezing the feature extractor so only the new classifier's weights are trained:
feature_extractor.trainable = False
# Building the new model:
model = tf.keras.Sequential([feature_extractor, tf.keras.layers.Dense(num_classes, activation='softmax')])
model.summary()
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
print('\t\u2022 Running on GPU' if tf.test.is_gpu_available() else '\t\u2022 GPU device not found. Running on CPU')
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= keras_layer (KerasLayer) (None, 1280) 2257984 _________________________________________________________________ dense (Dense) (None, 102) 130662 ================================================================= Total params: 2,388,646 Trainable params: 130,662 Non-trainable params: 2,257,984 _________________________________________________________________ • Running on GPU
# Training the model with early stopping on the validation loss:
epochs = 30
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)
history = model.fit(training_batches,
                    epochs=epochs,
                    validation_data=validation_batches,
                    callbacks=[early_stopping])
Epoch 1/30 193/193 [==============================] - 65s 133ms/step - loss: 1.9659 - accuracy: 0.5920 - val_loss: 1.0149 - val_accuracy: 0.7882 Epoch 2/30 193/193 [==============================] - 29s 127ms/step - loss: 0.5455 - accuracy: 0.9011 - val_loss: 0.6333 - val_accuracy: 0.8686 Epoch 3/30 193/193 [==============================] - 29s 129ms/step - loss: 0.3128 - accuracy: 0.9507 - val_loss: 0.5221 - val_accuracy: 0.8725 Epoch 4/30 193/193 [==============================] - 29s 128ms/step - loss: 0.2069 - accuracy: 0.9714 - val_loss: 0.4485 - val_accuracy: 0.8971 Epoch 5/30 193/193 [==============================] - 29s 128ms/step - loss: 0.1472 - accuracy: 0.9844 - val_loss: 0.4496 - val_accuracy: 0.8922 Epoch 6/30 193/193 [==============================] - 29s 128ms/step - loss: 0.1105 - accuracy: 0.9904 - val_loss: 0.4128 - val_accuracy: 0.8971 Epoch 7/30 193/193 [==============================] - 29s 128ms/step - loss: 0.0859 - accuracy: 0.9928 - val_loss: 0.3915 - val_accuracy: 0.8990 Epoch 8/30 193/193 [==============================] - 29s 128ms/step - loss: 0.0672 - accuracy: 0.9972 - val_loss: 0.3766 - val_accuracy: 0.8990 Epoch 9/30 193/193 [==============================] - 29s 128ms/step - loss: 0.0547 - accuracy: 0.9977 - val_loss: 0.3698 - val_accuracy: 0.9049 Epoch 10/30 193/193 [==============================] - 29s 128ms/step - loss: 0.0441 - accuracy: 0.9990 - val_loss: 0.3746 - val_accuracy: 0.8980 Epoch 11/30 193/193 [==============================] - 29s 128ms/step - loss: 0.0372 - accuracy: 0.9992 - val_loss: 0.3516 - val_accuracy: 0.9029 Epoch 12/30 193/193 [==============================] - 29s 128ms/step - loss: 0.0308 - accuracy: 0.9998 - val_loss: 0.3491 - val_accuracy: 0.9059 Epoch 13/30 193/193 [==============================] - 28s 126ms/step - loss: 0.0262 - accuracy: 0.9998 - val_loss: 0.3586 - val_accuracy: 0.9039 Epoch 14/30 193/193 [==============================] - 29s 128ms/step - loss: 0.0226 - accuracy: 0.9998 - val_loss: 0.3375 - val_accuracy: 0.9118 Epoch 15/30 193/193 [==============================] - 29s 128ms/step - loss: 0.0198 - accuracy: 0.9998 - val_loss: 0.3376 - val_accuracy: 0.9078 Epoch 16/30 193/193 [==============================] - 29s 127ms/step - loss: 0.0174 - accuracy: 0.9997 - val_loss: 0.3387 - val_accuracy: 0.9049 Epoch 17/30 193/193 [==============================] - 29s 128ms/step - loss: 0.0155 - accuracy: 0.9997 - val_loss: 0.3336 - val_accuracy: 0.9098 Epoch 18/30 193/193 [==============================] - 29s 128ms/step - loss: 0.0136 - accuracy: 0.9997 - val_loss: 0.3329 - val_accuracy: 0.9137 Epoch 19/30 193/193 [==============================] - 29s 127ms/step - loss: 0.0114 - accuracy: 1.0000 - val_loss: 0.3353 - val_accuracy: 0.9147 Epoch 20/30 193/193 [==============================] - 29s 128ms/step - loss: 0.0107 - accuracy: 0.9997 - val_loss: 0.3307 - val_accuracy: 0.9088 Epoch 21/30 193/193 [==============================] - 29s 128ms/step - loss: 0.0100 - accuracy: 0.9997 - val_loss: 0.3303 - val_accuracy: 0.9127 Epoch 22/30 193/193 [==============================] - 29s 128ms/step - loss: 0.0092 - accuracy: 0.9997 - val_loss: 0.3353 - val_accuracy: 0.9088 Epoch 23/30 193/193 [==============================] - 29s 127ms/step - loss: 0.0079 - accuracy: 0.9997 - val_loss: 0.3362 - val_accuracy: 0.9098 Epoch 24/30 193/193 [==============================] - 29s 127ms/step - loss: 0.0068 - accuracy: 0.9998 - val_loss: 0.3283 - val_accuracy: 0.9127 Epoch 25/30 193/193 [==============================] - 29s 
127ms/step - loss: 0.0061 - accuracy: 0.9998 - val_loss: 0.3340 - val_accuracy: 0.9137 Epoch 26/30 193/193 [==============================] - 29s 127ms/step - loss: 0.0054 - accuracy: 0.9998 - val_loss: 0.3310 - val_accuracy: 0.9147 Epoch 27/30 193/193 [==============================] - 28s 125ms/step - loss: 0.0054 - accuracy: 0.9997 - val_loss: 0.3260 - val_accuracy: 0.9157 Epoch 28/30 193/193 [==============================] - 29s 127ms/step - loss: 0.0046 - accuracy: 0.9998 - val_loss: 0.3370 - val_accuracy: 0.9118 Epoch 29/30 193/193 [==============================] - 29s 129ms/step - loss: 0.0041 - accuracy: 0.9998 - val_loss: 0.3221 - val_accuracy: 0.9157 Epoch 30/30 193/193 [==============================] - 29s 127ms/step - loss: 0.0038 - accuracy: 0.9998 - val_loss: 0.3342 - val_accuracy: 0.9157
It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. You should be able to reach around 70% accuracy on the test set if the model has been trained well.
# Plotting the loss and accuracy values achieved during training for the training and validation set:
training_accuracy = history.history['accuracy']
validation_accuracy = history.history['val_accuracy']
training_loss = history.history['loss']
validation_loss = history.history['val_loss']
# Use the number of epochs actually run (early stopping can end training before `epochs`):
epochs_range = range(len(training_accuracy))
plt.figure(figsize=(10, 10))
plt.subplot(2, 1, 1)
plt.plot(epochs_range, training_accuracy, label='Training Accuracy')
plt.plot(epochs_range, validation_accuracy, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training & Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(epochs_range, training_loss, label='Training Loss')
plt.plot(epochs_range, validation_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training & Validation Loss')
plt.show()
# Printing the loss and accuracy values achieved on the entire test set:
test_loss, test_accuracy = model.evaluate(testing_batches)
print('The loss on the test set is: {:,.3f}'.format(test_loss))
print('The accuracy on the test set is: {:.3%}'.format(test_accuracy))
32/32 [==============================] - 4s 132ms/step - loss: 0.3965 - accuracy: 0.8873 The loss on the test set is: 0.397 The accuracy on the test set is: 88.725%
Now that your network is trained, save the model so you can load it later for making inferences. In the cell below, save your model as a Keras model (i.e. save it as an HDF5 file).
# Saving the trained model as a Keras model (HDF5 file):
file_path = './oxford_flowers102_improved.h5'
model.save(file_path)
# Also exporting a timestamped copy in the TensorFlow SavedModel format:
t = time.time()
saved_model_dir = './{}_oxford_flowers102_weights_improved'.format(int(t))
tf.saved_model.save(model, saved_model_dir)
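As an aside, the SavedModel copy exported above can be restored independently of the HDF5 file; a minimal sketch (note that tf.saved_model.load returns a low-level object rather than a compiled Keras model):
# Illustrative only: reload the SavedModel export created above.
reloaded = tf.saved_model.load(saved_model_dir)
print(list(reloaded.signatures.keys()))  # typically ['serving_default']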
Load the Keras model you saved above.
# Loading the Keras model (hub.KerasLayer must be passed via custom_objects so the saved layer can be deserialized):
Keras_model = tf.keras.models.load_model(file_path, custom_objects={'KerasLayer': hub.KerasLayer})
Keras_model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= keras_layer (KerasLayer) (None, 1280) 2257984 _________________________________________________________________ dense (Dense) (None, 102) 130662 ================================================================= Total params: 2,388,646 Trainable params: 130,662 Non-trainable params: 2,257,984 _________________________________________________________________
Now you'll write a function that uses your trained network for inference. Write a function called predict that takes an image path, a model, and then returns the top $K$ most likely class labels along with the probabilities. The function call should look like:
probs, classes = predict(image_path, model, top_k)
If top_k=5 the output of the predict function should be something like this:
probs, classes = predict(image_path, model, 5)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
Your predict function should use PIL to load the image from the given image_path. You can use the Image.open function to load the images. The Image.open() function returns an Image object. You can convert this Image object to a NumPy array by using the np.asarray() function.
The predict function will also need to handle pre-processing the input image such that it can be used by your model. We recommend you write a separate function called process_image that performs the pre-processing. You can then call the process_image function from the predict function.
The process_image function should take in an image (in the form of a NumPy array) and return an image in the form of a NumPy array with shape (224, 224, 3).
First, you should convert your image into a TensorFlow Tensor and then resize it to the appropriate size using tf.image.resize.
Second, the pixel values of the input images are typically encoded as integers in the range 0-255, but the model expects the pixel values to be floats in the range 0-1. Therefore, you'll also need to normalize the pixel values.
Finally, convert your image back to a NumPy array using the .numpy() method.
# Creating the process_image function:
def process_image(image, image_size=224):
    # Convert the NumPy array to a Tensor, resize to 224x224, and scale pixel values to [0, 1]:
    image = tf.convert_to_tensor(image)
    image = tf.image.resize(image, (image_size, image_size))
    image = tf.cast(image, tf.float32)
    image /= 255
    return image.numpy()
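As a quick check of the contract described above, process_image can be exercised on a random dummy image (hypothetical, not one of the provided test images) to confirm the output shape and value range:
# Sanity check with a dummy uint8 image: process_image should return an array
# of shape (224, 224, 3) with float values in [0, 1].
dummy_image = np.random.randint(0, 256, size=(500, 600, 3), dtype=np.uint8)
processed = process_image(dummy_image)
print(processed.shape, processed.min(), processed.max())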
To check your process_image function we have provided 4 images in the ./test_images/ folder: cautleya_spicata.jpg, hard-leaved_pocket_orchid.jpg, orange_dahlia.jpg, and wild_pansy.jpg.
The code below loads one of the above images using PIL and plots the original image alongside the image produced by your process_image function. If your process_image function works, the plotted image should be the correct size.
# Loading one image:
image_path = '/content/sample_data/Images/orange_dahlia.jpg'
IMAGE = Image.open(image_path)
tested_image = np.asarray(IMAGE)
# Processing the selected image:
processed_tested_image = process_image(tested_image)
# Plotting the selected image before and after processing:
fig, (ax1, ax2) = plt.subplots(figsize=(10,10), ncols=2)
ax1.imshow(tested_image)
ax1.set_title('Original Image')
ax2.imshow(processed_tested_image)
ax2.set_title('Processed Image')
plt.tight_layout()
plt.show()
Once you can get images in the correct format, it's time to write the predict function for making inference with your model.
Remember, the predict function should take an image path, a model, and an integer top_k, and return the top $K$ probabilities and class labels in the format described above.
Note: The image returned by the process_image function is a NumPy array with shape (224, 224, 3) but the model expects the input images to be of shape (1, 224, 224, 3). This extra dimension represents the batch size. We suggest you use the np.expand_dims() function to add the extra dimension.
# Creating the predict function:
def predict(image_path, model, top_k=1, image_size=224):
    # Load the image with PIL and convert it to a NumPy array:
    image = Image.open(image_path)
    image = np.asarray(image)
    processed_image = process_image(image, image_size)
    # Add a batch dimension so the shape becomes (1, 224, 224, 3):
    expanded_image = np.expand_dims(processed_image, axis=0)
    probs = model.predict(expanded_image)
    top_k_values, top_k_indices = tf.nn.top_k(probs, k=top_k)
    top_k_values = top_k_values.numpy()
    top_k_indices = top_k_indices.numpy()
    # The processed image is also returned so it can be plotted alongside the predictions:
    return top_k_values, top_k_indices, processed_image
It's always good to check the predictions made by your model to make sure they are correct. To check your predictions we have provided 4 images in the ./test_images/ folder: cautleya_spicata.jpg, hard-leaved_pocket_orchid.jpg, orange_dahlia.jpg, and wild_pansy.jpg.
In the cell below use matplotlib to plot the input image alongside the probabilities for the top 5 classes predicted by your model. Plot the probabilities as a bar graph.
You can convert from the class integer labels to actual flower names using the json_dic mapping defined above.
# Plotting the input image along with the top 5 classes:
images_paths = ['/content/sample_data/Images/cautleya_spicata.jpg',
                '/content/sample_data/Images/hard-leaved_pocket_orchid.jpg',
                '/content/sample_data/Images/orange_dahlia.jpg',
                '/content/sample_data/Images/wild_pansy.jpg']
top_classes = 5
for img_path in images_paths:
    top_values, top_indices, image = predict(img_path, Keras_model, top_classes)
    print('probabilities:', top_values)
    print('classes:', top_indices)
    predicted_classes_names = []
    print("predicted_classes_names:")
    for i in top_indices[0]:
        # +1 converts the 0-based model labels to the 1-based json_dic keys:
        print(" - ", json_dic[str(i + 1)])
        predicted_classes_names.append(json_dic[str(i + 1)])
    fig, (ax1, ax2) = plt.subplots(figsize=(10, 10), ncols=2)
    ax1.imshow(image)
    ax1.axis('off')
    ax1.set_title(predicted_classes_names[0])
    ax2.barh(top_indices[0], top_values[0])
    ax2.set_aspect(0.1)
    ax2.set_yticks(top_indices[0])
    ax2.set_yticklabels(predicted_classes_names, size='small')
    ax2.set_title('Class Probabilities')
    ax2.set_xlim(0, 1.1)
    plt.tight_layout()
    plt.show()
probabilities: [[9.9966109e-01 7.5003212e-05 5.8477941e-05 3.0567822e-05 2.4680916e-05]] classes: [[60 93 23 95 38]] predicted_classes_names: - cautleya spicata - foxglove - red ginger - camellia - siam tulip
probabilities: [[9.9981505e-01 1.7349416e-04 9.1614702e-06 6.4031673e-07 6.0430364e-07]] classes: [[ 1 17 76 6 79]] predicted_classes_names: - hard-leaved pocket orchid - peruvian lily - passion flower - moon orchid - anthurium
probabilities: [[9.9863428e-01 4.5959259e-04 4.3942465e-04 1.3001983e-04 9.6722200e-05]] classes: [[58 65 4 40 82]] predicted_classes_names: - orange dahlia - osteospermum - english marigold - barbeton daisy - hibiscus
probabilities: [[9.9999583e-01 1.0885099e-06 1.0008648e-06 4.3269904e-07 2.5686174e-07]] classes: [[51 52 85 18 63]] predicted_classes_names: - wild pansy - primula - tree mallow - balloon flower - silverbush